The objective of this study was to examine how non-autistic and autistic participants process social rewards, and whether participants could learn which of two individuals was more similar or dissimilar to them. Prior to the fMRI scan, participants completed a survey of their interests (e.g. "I am an animal lover"). Using this survey, with over 200 responses, we matched real participants to each other, such that every participant had a similar peer who shared 75% of their interests and a dissimilar peer who shared only 25% of their interests. In the fMRI task, participants were shown an individual's name, either Shiloh or Charlie; these were the similar and dissimilar peers. On each trial, participants were first shown the peer's name, then their own response to a survey question (e.g. "animal lover", indicating that the participant had responded that they love animals). Next, participants had to press a button to learn about the shown peer. The feedback consisted of a thumbs up with a "Me too!" message or a thumbs down with a "Not me!" message, indicating whether or not the peer shared that interest. There was also a computer peer condition in which the computer gave 50% positive feedback as a "match" or "no match".
This was an event-related design with four runs of the task.
library(dplyr)
library(tidyr)
library(ggplot2)
library(emmeans)
library(lmerTest)
library(gridExtra)
library(multcomp)
library(plyr)  # loaded after dplyr, so plyr::summarise masks dplyr::summarise (used by ddply below)
library(sjPlot)
data_dir <- 'derivatives/task_socialreward/data/'
subj_df <- read.csv('participants.tsv', sep = '\t')
# Remove prefix from subject IDs
subj_df$participant_id<-gsub("sub-SCN","SCN_",as.character(subj_df$participant_id))
print(paste('Found', length(subj_df$participant_id), 'participants'))
## [1] "Found 133 participants"
Task errors: The task program contained an error that could show the participant incorrect options for their own preferences (e.g. displaying "like animals" when the participant had said they did not like animals). We will remove participants for whom this occurred frequently (more than 5 trials).
# Create an empty dataframe to fill with all the participant data
rt_data <- data.frame(matrix(ncol = 6, nrow = 0))
# Name columns for the empty dataframe
colnames(rt_data) <- c('ParticipantID', 'Run', 'X', 'rating', 'ConditionName',
'Correct_RT')
# Import Data
for (subj in subj_df$participant_id) {
# Find data for all runs
run_files <- Sys.glob(paste(data_dir, subj, '/*-errors.csv', sep = ''))
# Loop through runs and combine into one df
for (run_file in run_files){
temp_run_data <- read.csv(run_file)
# Only include participant data if there were no task errors
if (sum(temp_run_data$redcap_v_task) == 0) {
# Filter for relevant columns
temp_run_data_fltr <- temp_run_data[,colnames(rt_data)]
# Append to entire df
rt_data <- rbind(rt_data, temp_run_data_fltr)
}
}
}
# Rename trial number column
names(rt_data)[names(rt_data) == 'X'] <- 'trial_num'
Add group (e.g. autistic, non-autistic) info to the reaction time dataframe
rt_data <- merge(rt_data, subj_df,
by.x = 'ParticipantID', by.y = 'participant_id')
# Rename column
rt_data <- rt_data %>% rename_at('ParticipantID', ~'participant_id')
Create columns for the peer condition and the valence of feedback
rt_data <- rt_data %>% separate(ConditionName, c('Valence', 'Condition'))
rt_data$Valence <- gsub('LowReward','negative',as.character(rt_data$Valence))
rt_data$Valence <- gsub('HighReward','positive',as.character(rt_data$Valence))
Calculate the valence of the previous trial within each condition
# Create an empty column of NAs
rt_data$Valence_prev <- NA
for (subj in unique(rt_data$participant_id)) {
# Create a list of the runs for this participant
temp_run_list <- unique(rt_data[rt_data$participant_id == subj, 'Run'])
for (run in temp_run_list) {
for (cond in c('SimPeer', 'DisPeer', 'Computer')) {
# Calculate the number of trials for a given condition per run per subject
temp_len <- length(rt_data[rt_data$participant_id == subj &
rt_data$Run == run &
rt_data$Condition == cond, 'Valence'])
# Create a list of the trial valences, excluding the first trial of that run
# since nothing was shown before that trial in the run
temp_val <- rt_data[rt_data$participant_id == subj &
rt_data$Run == run &
rt_data$Condition == cond, 'Valence'][seq_len(temp_len - 1)]
# Fill in the previous trial valence for that condition
rt_data[rt_data$participant_id == subj &
rt_data$Run == run &
rt_data$Condition == cond, 'Valence_prev'] <- c(NA, temp_val)
}
}
}
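The nested loops above can also be written more compactly with dplyr; a sketch (with toy data, assuming the same column names as the dataframe above), using `dplyr::lag` within participant × run × condition groups:

```r
library(dplyr)

# Toy data: four trials in one run for one participant
toy <- data.frame(
participant_id = 'SCN_101',
Run = c(1, 1, 1, 1),
Condition = c('SimPeer', 'SimPeer', 'DisPeer', 'SimPeer'),
Valence = c('positive', 'negative', 'positive', 'positive')
)

# lag() shifts the valence down by one trial within each group,
# so the first trial of each condition within a run gets NA
toy <- toy %>%
group_by(participant_id, Run, Condition) %>%
mutate(Valence_prev = lag(Valence)) %>%
ungroup()
```

This avoids the explicit bookkeeping of trial counts and handles runs with a single trial per condition without special-casing.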
Make the group variable a nominal (categorical) data type
rt_data$group <- gsub('1','non-autistic',as.character(rt_data$group))
rt_data$group <- gsub('2','autistic',as.character(rt_data$group))
head(rt_data)
demo_info <- read.csv('misc/SCONNChildPacket-IdentifyingInfo_DATA_2024-10-18_1220.csv')
demo_info <- demo_info[demo_info$redcap_event_name == 'time_1_arm_1', ]
demo_info$record_id <- toupper(demo_info$record_id)
demo_info <- demo_info[which((demo_info$record_id %in% unique(rt_data$participant_id))==TRUE),]
mean(as.numeric(demo_info$child_exact_age), na.rm=TRUE)
## Warning in mean(as.numeric(demo_info$child_exact_age), na.rm = TRUE): NAs
## introduced by coercion
## [1] 13.05724
sd(as.numeric(demo_info$child_exact_age), na.rm=TRUE)
## Warning in is.data.frame(x): NAs introduced by coercion
## [1] 1.130972
table(demo_info$child_gender_lab_entered)
##
## 1 2 4 5
## 66 50 2 1
When a participant did not press the correct button, the feedback shown was "no data". In both of the instances where this occurred, the participant was not paying attention, so we exclude these trials. If this was the case for more than 20% of a run, we exclude data from the entire run.
# Find participant runs with more than 20% missed trials
exclude_subj_runs <- data.frame(participant_id = character(), Run = character(), stringsAsFactors = FALSE)
# Create a list of all subject IDs
subj_list <- unique(rt_data$participant_id)
for (subj in subj_list) {
# Filter for subject specific data
temp_subj_data <- rt_data[rt_data$participant_id == subj, ]
# Find runs per subject
temp_runs <- unique(temp_subj_data$Run)
for (run in temp_runs) {
# Filter for run specific data
temp_run_rt <- temp_subj_data[temp_subj_data$Run == run, 'Correct_RT']
# Calculate the percentage of missed trials
temp_na_perc <- sum(is.na(temp_run_rt)) / length(temp_run_rt)
# If more than 20% missed trials, exclude the participant run
if (temp_na_perc > 0.20) {
exclude_subj_runs[nrow(exclude_subj_runs) + 1,] <- c(subj, run)
}
}
}
print(paste('Excluding',nrow(exclude_subj_runs),'runs from',
length(unique(exclude_subj_runs$participant_id)),
'participants'))
## [1] "Excluding 5 runs from 4 participants"
Use the list of bad participant runs to remove that data from the larger dataframe
for (i in seq_len(nrow(exclude_subj_runs))) {  # seq_len() safely handles zero excluded runs
temp_drop_idx <- which(rt_data$participant_id == exclude_subj_runs[i, "participant_id"] &
rt_data$Run == exclude_subj_runs[i, "Run"])
rt_data <- rt_data[-temp_drop_idx, ]
}
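The same exclusion can be done in one step with `dplyr::anti_join`, which also handles an empty exclusion list gracefully. A sketch with toy data (column names assumed to match the dataframes above):

```r
library(dplyr)

# Toy versions of rt_data and exclude_subj_runs
toy_rt <- data.frame(participant_id = c('A', 'A', 'B'),
Run = c(1, 2, 1),
Correct_RT = c(0.5, 0.6, 0.7))
toy_excl <- data.frame(participant_id = 'A', Run = 2)

# Keep only rows whose (participant_id, Run) pair is NOT in the exclusion list
toy_rt_clean <- anti_join(toy_rt, toy_excl, by = c('participant_id', 'Run'))
```

Note that the join keys must have matching types in both dataframes (e.g. `Run` numeric in both).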
With the remaining data, create a column for missed trials. This might be interesting as an indicator of learning: for example, if participants have learned who the dissimilar peer is, perhaps they would respond less on those trials than on similar peer trials. Accurate button presses are coded as 1, and missed or inaccurate presses as 0.
rt_data$accuracy <- 1
rt_data[is.na(rt_data$Correct_RT),'accuracy'] <- 0
table(rt_data$accuracy)
##
## 0 1
## 151 9906
Add run as a categorical variable (ultimately not used in the models below)
rt_data$Run.f <- factor(rt_data$Run)
Using method from Jones et al., 2014: “Reaction times to the cue after the wink occurred were z-score transformed to each individual’s mean and standard deviation after first removing outliers (defined as reaction times 3 standard deviations above or below the individual’s mean reaction time) and log transforming each reaction time to satisfy normality assumptions.”
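The 3 SD outlier-removal step described in this method is not applied in the code below; the full pipeline could be written per participant roughly as follows. This is a dplyr sketch with simulated toy data, not the code used in this analysis:

```r
library(dplyr)

# Simulated right-skewed RTs for two participants
set.seed(1)
toy <- data.frame(participant_id = rep(c('A', 'B'), each = 50),
Correct_RT = rlnorm(100, meanlog = 0, sdlog = 0.3))

toy_z <- toy %>%
group_by(participant_id) %>%
# 1. Remove outliers beyond 3 SD of each participant's mean raw RT
filter(abs(Correct_RT - mean(Correct_RT, na.rm = TRUE)) <=
  3 * sd(Correct_RT, na.rm = TRUE)) %>%
# 2. Log transform, then 3. z-score within participant
mutate(Correct_RT_log = log(Correct_RT),
  Correct_RT_logz = (Correct_RT_log - mean(Correct_RT_log, na.rm = TRUE)) /
    sd(Correct_RT_log, na.rm = TRUE)) %>%
ungroup()
```

After this transform, each participant's `Correct_RT_logz` has mean 0 and SD 1 by construction.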
# Create copy of the data
rt_data_mod <- data.frame(rt_data)
# Remove outliers greater than 3 SD per participant (described above, but left
# unimplemented; the commented line below was an exploratory attempt)
#rt_data_mod[( rt_data_mod$ParticipantID %in% "SCN_101" & rt_data_mod$Correct_RT > 3*temp_sd),]
# Log transform
rt_data_mod$Correct_RT_log <- log(rt_data_mod$Correct_RT)
# Create empty column for log zscored data
rt_data_mod$Correct_RT_logz <- NA
for (subj in unique(rt_data_mod$participant_id)) {
# Calculate each participant's mean and SD of the log-transformed RTs
temp_subj_data <- filter(rt_data_mod, participant_id == subj)
temp_mean <- mean(temp_subj_data$Correct_RT_log, na.rm = TRUE)
temp_sd <- sd(temp_subj_data$Correct_RT_log, na.rm = TRUE)
# Z-score the log RTs within participant
temp_idx <- rt_data_mod$participant_id == subj
rt_data_mod[temp_idx, 'Correct_RT_logz'] <-
  (rt_data_mod[temp_idx, 'Correct_RT_log'] - temp_mean) / temp_sd
}
plot1 <- ggplot(rt_data_mod, aes(x=Correct_RT)) + geom_histogram() + theme_classic()
plot2 <- ggplot(rt_data_mod, aes(x=Correct_RT_logz)) + geom_histogram() + theme_classic()
grid.arrange(plot1, plot2, ncol=2)
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
The raw reaction times are right-skewed rather than normally distributed. This may reflect the task design: participants were given one window to view the interest and the peer condition and a separate window to respond, so all participants viewed the condition for the same fixed duration. After the log and z-transform, the reaction times are approximately normally distributed.
Lastly, we will set the similar peer condition as the reference level for the regression analyses.
rt_data_mod$Condition <- relevel(factor(rt_data_mod$Condition), ref = "SimPeer")
rt_data_mod_typ <- filter(rt_data_mod, group == 'non-autistic')
print(paste("There are ",length(unique(rt_data_mod_typ$participant_id)),
" non-autistic participants"))
## [1] "There are 88 non-autistic participants"
Use a linear mixed-effects model to model both the fixed effects of our conditions of interest, and the random effects of the participants. An example of a random effect would be that some participants are just faster to respond in all conditions.
model_run_typ <- lmer(Correct_RT_logz ~ Run + (1 + Run | participant_id),
data = rt_data_mod_typ)
## boundary (singular) fit: see help('isSingular')
summary(model_run_typ)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Correct_RT_logz ~ Run + (1 + Run | participant_id)
## Data: rt_data_mod_typ
##
## REML criterion at convergence: 19859
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0785 -0.6838 -0.1188 0.5835 5.5009
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## participant_id (Intercept) 0.1686 0.4107
## Run 0.0273 0.1652 -1.00
## Residual 0.9385 0.9688
## Number of obs: 7112, groups: participant_id, 88
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 0.24800 0.05296 90.17102 4.683 9.94e-06 ***
## Run -0.10031 0.02085 88.86038 -4.811 6.11e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr)
## Run -0.975
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
This analysis shows that participants got faster at responding as the task progressed (B = -0.10, p < .001).
ggplot(data = rt_data_mod_typ, aes(x=Run, y=Correct_RT_logz, group=Run)) +
geom_boxplot(outlier.shape = NA) +
geom_jitter(color='black', alpha=0.05) +
theme_classic()
## Warning: Removed 106 rows containing non-finite outside the scale range
## (`stat_boxplot()`).
## Warning: Removed 106 rows containing missing values or values outside the scale range
## (`geom_point()`).
The box plots show this significant trend, along with the individual response times for all participants.
Next, we will examine whether there are differences in reaction time across our peer conditions throughout the task. Again, the conditions were receiving feedback from a similar peer (75% positive feedback), a dissimilar peer (25% positive feedback), or a random computer (50% positive feedback).
model_run_peer_typ <- lmer(Correct_RT_logz ~ Run*Condition + (1 + Run*Condition | participant_id),
data = rt_data_mod_typ)
## boundary (singular) fit: see help('isSingular')
## Warning: Model failed to converge with 1 negative eigenvalue: -1.6e+00
summary(model_run_peer_typ)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula:
## Correct_RT_logz ~ Run * Condition + (1 + Run * Condition | participant_id)
## Data: rt_data_mod_typ
##
## REML criterion at convergence: 19859.6
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0429 -0.6774 -0.1118 0.5840 5.5432
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## participant_id (Intercept) 0.170205 0.41256
## Run 0.026183 0.16181 -1.00
## ConditionComputer 0.020188 0.14208 -0.17 0.11
## ConditionDisPeer 0.017179 0.13107 -0.09 0.04 0.77
## Run:ConditionComputer 0.001210 0.03479 0.08 -0.02 -0.96 -0.57
## Run:ConditionDisPeer 0.002593 0.05092 -0.04 0.08 -0.57 -0.96
## Residual 0.934562 0.96673
##
##
##
##
##
##
## 0.35
##
## Number of obs: 7112, groups: participant_id, 88
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 0.30521 0.06605 97.13767 4.621 1.17e-05 ***
## Run -0.12806 0.02511 98.72647 -5.100 1.64e-06 ***
## ConditionComputer -0.09537 0.06983 103.56827 -1.366 0.1750
## ConditionDisPeer -0.07754 0.06969 222.48966 -1.113 0.2670
## Run:ConditionComputer 0.06596 0.02542 235.88870 2.594 0.0101 *
## Run:ConditionDisPeer 0.01794 0.02579 199.70643 0.696 0.4874
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) Run CndtnC CndtDP Rn:CnC
## Run -0.952
## CondtnCmptr -0.529 0.461
## ConditnDsPr -0.516 0.450 0.511
## Rn:CndtnCmp 0.472 -0.495 -0.910 -0.457
## Rn:CndtnDsP 0.451 -0.474 -0.460 -0.913 0.491
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
Accounting for the peer condition, task run had a significant effect on reaction times (B = -0.13, p < .001), such that participants got faster as the task went on. There was no significant main effect of peer condition. There was a significant interaction in which reaction times decreased more steeply across runs in the similar peer condition than in the computer control condition (B = 0.07, p = .01).
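The run-by-condition slopes can be compared directly with emmeans (loaded above but not otherwise used here). A sketch, assuming the fitted `model_run_peer_typ` object from above:

```r
# Estimate the Run slope within each condition and compare the slopes pairwise;
# a more negative slope means faster speeding-up across runs for that condition
emtrends(model_run_peer_typ, pairwise ~ Condition, var = "Run")
```

This gives the same contrasts as the interaction terms in the model summary, but expressed as condition-specific slopes with pairwise tests, which can be easier to interpret.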
ggplot(data = rt_data_mod_typ, aes(x=factor(Run), y=Correct_RT_logz, fill=Condition)) +
geom_boxplot(outlier.shape = NA) +
geom_point(alpha=0.05, aes(fill=Condition),
position = position_jitterdodge(dodge.width = 0.8)) +
theme_classic()
## Warning: Removed 106 rows containing non-finite outside the scale range
## (`stat_boxplot()`).
## Warning: Removed 106 rows containing missing values or values outside the scale range
## (`geom_point()`).
tab_model(model_run_peer_typ)
| Correct_RT_logz | |||
|---|---|---|---|
| Predictors | Estimates | CI | p |
| (Intercept) | 0.31 | 0.18 – 0.43 | <0.001 |
| Run | -0.13 | -0.18 – -0.08 | <0.001 |
| Condition [Computer] | -0.10 | -0.23 – 0.04 | 0.172 |
| Condition [DisPeer] | -0.08 | -0.21 – 0.06 | 0.266 |
| Run × Condition [Computer] | 0.07 | 0.02 – 0.12 | 0.009 |
| Run × Condition [DisPeer] | 0.02 | -0.03 – 0.07 | 0.487 |
| Random Effects | |||
| σ2 | 0.93 | ||
| τ00 participant_id | 0.17 | ||
| τ11 participant_id.Run | 0.03 | ||
| τ11 participant_id.ConditionComputer | 0.02 | ||
| τ11 participant_id.ConditionDisPeer | 0.02 | ||
| τ11 participant_id.Run:ConditionComputer | 0.00 | ||
| τ11 participant_id.Run:ConditionDisPeer | 0.00 | ||
| ρ01 | -1.00 | ||
| -0.17 | |||
| -0.09 | |||
| 0.08 | |||
| -0.04 | |||
| N participant_id | 88 | ||
| Observations | 7112 | ||
| Marginal R2 / Conditional R2 | 0.016 / NA | ||
plot_model(model_run_peer_typ, type="pred", terms=c("Run","Condition"),
ci.lvl=NA) +
theme_classic()
rt_data_mod_typ_means <- aggregate(Correct_RT_logz ~ participant_id + Run + Condition, rt_data_mod_typ, mean)
rt_data_mod_typ_means$Condition <- factor(rt_data_mod_typ_means$Condition,
levels=c('SimPeer', 'DisPeer', 'Computer'))
ggplot(data=rt_data_mod_typ_means, aes(x=factor(Run), y=Correct_RT_logz,
group=Condition, color=Condition)) +
geom_smooth(method='lm') +
geom_jitter(alpha=0.2) +
scale_color_manual(values=c('#1f77b4', '#ff7f0e', '#2ca02c'),
labels=c('Similar Peer', 'Dissimilar Peer', 'Computer')) +
theme_classic()
## `geom_smooth()` using formula = 'y ~ x'
Another way to look at reaction time is to examine whether reaction times are affected by the valence of the previous trial from the same condition. For example, when deciding whether to learn about "Shiloh" on the current trial, if Shiloh had given negative feedback on their last trial, participants might be less motivated to learn about Shiloh on the current trial.
model_run_prev_val_typ <- lmer(Correct_RT_logz ~ Run*Valence_prev + (1 + Run*Valence_prev | participant_id),
data = rt_data_mod_typ)
## boundary (singular) fit: see help('isSingular')
summary(model_run_prev_val_typ)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Correct_RT_logz ~ Run * Valence_prev + (1 + Run * Valence_prev |
## participant_id)
## Data: rt_data_mod_typ
##
## REML criterion at convergence: 17445
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0892 -0.6810 -0.1171 0.5770 5.4404
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## participant_id (Intercept) 0.190490 0.43645
## Run 0.031131 0.17644 -0.99
## Valence_prevpositive 0.001538 0.03922 0.02 0.11
## Run:Valence_prevpositive 0.004016 0.06337 -0.04 -0.09 -1.00
## Residual 0.937827 0.96841
## Number of obs: 6231, groups: participant_id, 88
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 1.485e-01 6.402e-02 9.738e+01 2.320 0.02242 *
## Run -7.004e-02 2.496e-02 9.327e+01 -2.807 0.00609 **
## Valence_prevpositive 3.016e-02 5.984e-02 2.049e+03 0.504 0.61426
## Run:Valence_prevpositive 2.739e-03 2.324e-02 1.817e+02 0.118 0.90631
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) Run Vlnc_p
## Run -0.955
## Vlnc_prvpst -0.472 0.416
## Rn:Vlnc_prv 0.403 -0.452 -0.888
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
ggplot(data = subset(rt_data_mod_typ, !is.na(Valence_prev)), aes(x=factor(Run), y=Correct_RT_logz, fill=Valence_prev)) +
geom_boxplot(outlier.shape = NA) +
geom_point(alpha=0.05, aes(fill=Valence_prev),
position = position_jitterdodge(dodge.width = 0.8)) +
theme_classic()
## Warning: Removed 93 rows containing non-finite outside the scale range
## (`stat_boxplot()`).
## Warning: Removed 93 rows containing missing values or values outside the scale range
## (`geom_point()`).
The valence of the previous within-condition trial did not affect reaction times.
Since the previous analysis showed no effect, perhaps explicitly adding an interaction with the peer condition will reveal an effect on reaction times. For example, the valence of the previous trial might be more salient when it was a similar peer who gave you positive feedback or a dissimilar peer who gave you negative feedback.
model_run_prev_val_cond_typ <- lmer(Correct_RT_logz ~ Run*Condition*Valence_prev + (1 + Run*Condition*Valence_prev | participant_id),
data = rt_data_mod_typ)
## boundary (singular) fit: see help('isSingular')
## Warning: Model failed to converge with 1 negative eigenvalue: -2.0e+00
summary(model_run_prev_val_cond_typ)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Correct_RT_logz ~ Run * Condition * Valence_prev + (1 + Run *
## Condition * Valence_prev | participant_id)
## Data: rt_data_mod_typ
##
## REML criterion at convergence: 17437.4
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0624 -0.6742 -0.1177 0.5713 5.2332
##
## Random effects:
## Groups Name Variance Std.Dev.
## participant_id (Intercept) 0.45613 0.6754
## Run 0.05781 0.2404
## ConditionComputer 0.43219 0.6574
## ConditionDisPeer 0.21277 0.4613
## Valence_prevpositive 0.30267 0.5502
## Run:ConditionComputer 0.03979 0.1995
## Run:ConditionDisPeer 0.01167 0.1080
## Run:Valence_prevpositive 0.03694 0.1922
## ConditionComputer:Valence_prevpositive 0.62642 0.7915
## ConditionDisPeer:Valence_prevpositive 0.21018 0.4584
## Run:ConditionComputer:Valence_prevpositive 0.06218 0.2494
## Run:ConditionDisPeer:Valence_prevpositive 0.03466 0.1862
## Residual 0.91726 0.9577
## Corr
##
## -0.94
## -0.72 0.54
## -0.78 0.60 0.95
## -0.77 0.63 0.95 0.95
## 0.70 -0.62 -0.92 -0.85 -0.95
## 0.81 -0.77 -0.84 -0.90 -0.90 0.88
## 0.65 -0.69 -0.70 -0.75 -0.84 0.85 0.93
## 0.70 -0.56 -0.95 -0.84 -0.92 0.96 0.78 0.69
## 0.78 -0.67 -0.93 -0.90 -0.94 0.93 0.91 0.80 0.88
## -0.65 0.63 0.81 0.73 0.88 -0.96 -0.81 -0.86 -0.92 -0.82
## -0.59 0.68 0.42 0.44 0.52 -0.57 -0.69 -0.70 -0.39 -0.73 0.50
##
## Number of obs: 6231, groups: participant_id, 88
##
## Fixed effects:
## Estimate Std. Error df
## (Intercept) 0.157411 0.120746 95.808286
## Run -0.082378 0.044043 102.133783
## ConditionComputer 0.011158 0.141187 89.844670
## ConditionDisPeer -0.031319 0.123656 132.410944
## Valence_prevpositive 0.044741 0.128278 107.288745
## Run:ConditionComputer 0.029138 0.049901 100.397098
## Run:ConditionDisPeer 0.008497 0.043327 247.749787
## Run:Valence_prevpositive -0.003144 0.046986 104.460044
## ConditionComputer:Valence_prevpositive -0.067401 0.175707 94.277079
## ConditionDisPeer:Valence_prevpositive 0.003701 0.168029 221.768723
## Run:ConditionComputer:Valence_prevpositive 0.028004 0.062795 101.789465
## Run:ConditionDisPeer:Valence_prevpositive -0.016190 0.062810 163.699950
## t value Pr(>|t|)
## (Intercept) 1.304 0.1955
## Run -1.870 0.0643 .
## ConditionComputer 0.079 0.9372
## ConditionDisPeer -0.253 0.8004
## Valence_prevpositive 0.349 0.7279
## Run:ConditionComputer 0.584 0.5606
## Run:ConditionDisPeer 0.196 0.8447
## Run:Valence_prevpositive -0.067 0.9468
## ConditionComputer:Valence_prevpositive -0.384 0.7021
## ConditionDisPeer:Valence_prevpositive 0.022 0.9824
## Run:ConditionComputer:Valence_prevpositive 0.446 0.6566
## Run:ConditionDisPeer:Valence_prevpositive -0.258 0.7969
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) Run CndtnC CndtDP Vlnc_p Rn:CnC Rn:CDP Rn:Vl_ CnC:V_
## Run -0.922
## CondtnCmptr -0.762 0.658
## ConditnDsPr -0.805 0.709 0.722
## Vlnc_prvpst -0.811 0.721 0.734 0.757
## Rn:CndtnCmp 0.698 -0.730 -0.910 -0.649 -0.677
## Rn:CndtnDsP 0.720 -0.776 -0.620 -0.900 -0.667 0.680
## Rn:Vlnc_prv 0.723 -0.790 -0.625 -0.666 -0.895 0.702 0.726
## CndtnCmp:V_ 0.643 -0.564 -0.835 -0.591 -0.784 0.775 0.509 0.679
## CndtnDsP:V_ 0.592 -0.534 -0.527 -0.723 -0.724 0.489 0.663 0.653 0.563
## Rn:CndtC:V_ -0.581 0.617 0.741 0.526 0.718 -0.828 -0.555 -0.765 -0.910
## Rn:CndDP:V_ -0.522 0.583 0.417 0.611 0.615 -0.480 -0.701 -0.696 -0.453
## CDP:V_ R:CC:V
## Run
## CondtnCmptr
## ConditnDsPr
## Vlnc_prvpst
## Rn:CndtnCmp
## Rn:CndtnDsP
## Rn:Vlnc_prv
## CndtnCmp:V_
## CndtnDsP:V_
## Rn:CndtC:V_ -0.514
## Rn:CndDP:V_ -0.893 0.513
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
ggplot(data = subset(rt_data_mod_typ, !is.na(Valence_prev)), aes(x=factor(Run), y=Correct_RT_logz, fill=Condition, alpha=Valence_prev)) +
geom_boxplot(outlier.shape = NA) +
theme_classic()
## Warning: Using alpha for a discrete variable is not advised.
## Warning: Removed 93 rows containing non-finite outside the scale range
## (`stat_boxplot()`).
Adding the peer condition interaction made no difference; no effects reached significance.
In the later runs, is there a difference in reaction time between the similar and dissimilar peer conditions? How many participants show this learning effect?
rt_means_subj_cond_run_typ <- ddply(rt_data_mod_typ, c('participant_id','Condition','Run'),
summarise,
mean=mean(Correct_RT, na.rm=TRUE))
Calculate the difference in RT between the similar and dissimilar peer conditions at each participant's last run
r4_peer_diff_typ <- data.frame(matrix(ncol = 2, nrow = 0))
# Name columns for the empty dataframe
colnames(r4_peer_diff_typ) <- c('ParticipantID', 'sim_dis_RT')
for (subj in unique(rt_means_subj_cond_run_typ$participant_id)) {
temp_subj_data <- rt_means_subj_cond_run_typ[rt_means_subj_cond_run_typ$participant_id == subj, ]
temp_last_run <- max(temp_subj_data$Run)  # participant's last available run
temp_run_data <- temp_subj_data[temp_subj_data$Run == temp_last_run, ]
temp_diff <- temp_run_data[temp_run_data$Condition == 'SimPeer', 'mean'] - temp_run_data[temp_run_data$Condition == 'DisPeer', 'mean']
r4_peer_diff_typ[nrow(r4_peer_diff_typ) + 1,] = c(subj,temp_diff)
}
r4_peer_diff_typ$sim_dis_RT <- as.numeric(r4_peer_diff_typ$sim_dis_RT)
ggplot(r4_peer_diff_typ, aes(x=sim_dis_RT)) + geom_histogram() + theme_classic()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
prop_learn <- nrow(r4_peer_diff_typ[r4_peer_diff_typ$sim_dis_RT < 0, ]) / nrow(r4_peer_diff_typ)
print(paste(round(prop_learn, 4)*100, '% of participants had faster RTs for similar peers than dissimilar peers for their last run.'))
## [1] "48.86 % of participants had faster RTs for similar peers than dissimilar peers for their last run."
t.test(r4_peer_diff_typ$sim_dis_RT, alternative = "two.sided", var.equal = FALSE)
##
## One Sample t-test
##
## data: r4_peer_diff_typ$sim_dis_RT
## t = -0.76751, df = 87, p-value = 0.4449
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## -0.03372827 0.01493645
## sample estimates:
## mean of x
## -0.00939591
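As a complementary check, the proportion of participants showing the effect can be tested against chance (50%) with an exact binomial test; a sketch using the counts reported above (48.86% of 88 participants, i.e. 43):

```r
# 43 of 88 participants had faster RTs for similar than dissimilar peers;
# test whether this proportion differs from 0.5
binom.test(43, 88, p = 0.5)
```

Consistent with the one-sample t-test on the RT differences, this proportion does not differ from chance.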
Note that previous valence here is the previous valence of the trial within the same condition.
rt_means_subj_pval_run_typ <- ddply(rt_data_mod_typ, c('participant_id','Valence_prev','Run'),
summarise,
mean=mean(Correct_RT, na.rm=TRUE))
# Remove rows with NA
rt_means_subj_pval_run_typ <- rt_means_subj_pval_run_typ[complete.cases(rt_means_subj_pval_run_typ), ]
Calculate the difference in RT between trials preceded by positive and by negative feedback at each participant's last run
# Create a new column for for previous valence
r4_peer_diff_typ$pos_neg_RT <- NA
for (subj in unique(rt_means_subj_pval_run_typ$participant_id)) {
temp_subj_data <- rt_means_subj_pval_run_typ[rt_means_subj_pval_run_typ$participant_id == subj, ]
temp_last_run <- max(temp_subj_data$Run)  # participant's last available run
temp_run_data <- temp_subj_data[temp_subj_data$Run == temp_last_run, ]
temp_diff <- temp_run_data[temp_run_data$Valence_prev == 'positive', 'mean'] - temp_run_data[temp_run_data$Valence_prev == 'negative', 'mean']
r4_peer_diff_typ[r4_peer_diff_typ$ParticipantID == subj, 'pos_neg_RT'] = temp_diff
}
r4_peer_diff_typ$pos_neg_RT <- as.numeric(r4_peer_diff_typ$pos_neg_RT)
ggplot(r4_peer_diff_typ, aes(x=pos_neg_RT)) + geom_histogram() + theme_classic()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
prop_learn <- nrow(r4_peer_diff_typ[r4_peer_diff_typ$pos_neg_RT < 0, ]) / nrow(r4_peer_diff_typ)
print(paste(round(prop_learn, 4)*100, '% of participants had faster RTs for trials that were preceded by a positive outcome than a negative outcome, for their last run.'))
## [1] "43.18 % of participants had faster RTs for trials that were preceded by a positive outcome than a negative outcome, for their last run."
t.test(r4_peer_diff_typ$pos_neg_RT, alternative = "two.sided", var.equal = FALSE)
##
## One Sample t-test
##
## data: r4_peer_diff_typ$pos_neg_RT
## t = 1.0052, df = 87, p-value = 0.3176
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## -0.01151998 0.03509535
## sample estimates:
## mean of x
## 0.01178769
We now repeat the same models in the full sample (autistic and non-autistic participants combined). Use a linear mixed-effects model to model both the fixed effects of our conditions of interest and the random effects of the participants. An example of a random effect would be that some participants are simply faster to respond in all conditions.
model_run <- lmer(Correct_RT_logz ~ Run + (1 + Run | participant_id),
data = rt_data_mod)
## boundary (singular) fit: see help('isSingular')
summary(model_run)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Correct_RT_logz ~ Run + (1 + Run | participant_id)
## Data: rt_data_mod
##
## REML criterion at convergence: 27715.8
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0696 -0.6933 -0.1198 0.5860 5.4790
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## participant_id (Intercept) 0.15676 0.3959
## Run 0.02551 0.1597 -1.00
## Residual 0.94460 0.9719
## Number of obs: 9906, groups: participant_id, 121
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 0.20880 0.04369 125.58049 4.779 4.83e-06 ***
## Run -0.08479 0.01723 123.59072 -4.922 2.68e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr)
## Run -0.974
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
This analysis shows that participants got faster at responding as the task progressed (B = -0.08, p < .001).
ggplot(data = rt_data_mod, aes(x=Run, y=Correct_RT_logz, group=Run)) +
geom_boxplot(outlier.shape = NA) +
geom_jitter(color='black', alpha=0.05) +
theme_classic()
## Warning: Removed 151 rows containing non-finite outside the scale range
## (`stat_boxplot()`).
## Warning: Removed 151 rows containing missing values or values outside the scale range
## (`geom_point()`).
The box plots show this significant trend, along with the individual response times for all participants.
Next, we will examine whether there are differences in reaction time across our peer conditions throughout the task. Again, the conditions were receiving feedback from a similar peer (75% positive feedback), a dissimilar peer (25% positive feedback), or a random computer (50% positive feedback).
model_run_peer <- lmer(Correct_RT_logz ~ Run*Condition + (1 + Run*Condition | participant_id),
data = rt_data_mod)
## boundary (singular) fit: see help('isSingular')
summary(model_run_peer)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula:
## Correct_RT_logz ~ Run * Condition + (1 + Run * Condition | participant_id)
## Data: rt_data_mod
##
## REML criterion at convergence: 27717.4
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0430 -0.6849 -0.1152 0.5895 5.5020
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## participant_id (Intercept) 0.1756666 0.41913
## Run 0.0257855 0.16058 -0.99
## ConditionComputer 0.0294911 0.17173 -0.40 0.29
## ConditionDisPeer 0.0107274 0.10357 -0.27 0.16 0.99
## Run:ConditionComputer 0.0016911 0.04112 0.46 -0.40 -0.83
## Run:ConditionDisPeer 0.0003615 0.01901 -0.48 0.58 -0.61
## Residual 0.9404171 0.96975
##
##
##
##
##
## -0.83
## -0.71 0.38
##
## Number of obs: 9906, groups: participant_id, 121
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 0.25646 0.05626 134.81652 4.558 1.14e-05 ***
## Run -0.10967 0.02118 139.17113 -5.179 7.66e-07 ***
## ConditionComputer -0.06876 0.05977 177.90767 -1.151 0.2515
## ConditionDisPeer -0.07572 0.05839 472.88265 -1.297 0.1953
## Run:ConditionComputer 0.05147 0.02165 288.50265 2.378 0.0181 *
## Run:ConditionDisPeer 0.02386 0.02136 2367.29961 1.117 0.2642
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) Run CndtnC CndtDP Rn:CnC
## Run -0.949
## CondtnCmptr -0.566 0.494
## ConditnDsPr -0.533 0.466 0.517
## Rn:CndtnCmp 0.513 -0.542 -0.902 -0.464
## Rn:CndtnDsP 0.434 -0.464 -0.449 -0.904 0.494
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
Accounting for peer condition, task run had a significant effect on reaction times (B = -0.11, p < .001), such that participants got faster as the task went on. There was no significant main effect of peer condition. There was a significant run-by-condition interaction: reaction times decreased more steeply across runs in the similar peer condition than in the computer control condition (B = 0.05, p = .018).
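One way to unpack this interaction is to estimate the linear run slope within each condition and compare the slopes pairwise. A sketch using emtrends (from emmeans, loaded above) on the fitted model:

```r
# Estimated linear trend of (log-z) RT over runs, per condition
run_trends <- emtrends(model_run_peer, "Condition", var = "Run")
run_trends

# Pairwise differences in run slopes between conditions (Tukey-adjusted)
pairs(run_trends)
```

A more negative slope indicates faster speeding-up across runs in that condition.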
ggplot(data = rt_data_mod, aes(x=factor(Run), y=Correct_RT_logz, fill=Condition)) +
geom_boxplot(outlier.shape = NA) +
geom_point(alpha=0.05, aes(fill=Condition),
position = position_jitterdodge(dodge.width = 0.8)) +
theme_classic()
## Warning: Removed 151 rows containing non-finite outside the scale range
## (`stat_boxplot()`).
## Warning: Removed 151 rows containing missing values or values outside the scale range
## (`geom_point()`).
Another way we can look at reaction time is to examine whether it is affected by the valence of the previous trial from the same condition. For example, when deciding whether to learn about "Shiloh" on the current trial, if Shiloh gave negative feedback on their last trial, participants might be less motivated to learn about Shiloh now.
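For reference, a within-condition previous-valence variable can be built by lagging the feedback valence inside each participant-by-condition stream. A sketch assuming `Trial` and `Valence` columns exist (the actual preprocessing for `Valence_prev` may differ):

```r
rt_data_mod <- rt_data_mod %>%
  group_by(participant_id, Condition) %>%
  arrange(Run, Trial, .by_group = TRUE) %>%
  # Feedback valence of the previous trial from the same peer/condition;
  # NA for the first trial of each condition
  mutate(Valence_prev = lag(Valence)) %>%
  ungroup()
```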
model_run_prev_val <- lmer(Correct_RT_logz ~ Run*Valence_prev + (1 + Run*Valence_prev | participant_id),
data = rt_data_mod)
## boundary (singular) fit: see help('isSingular')
summary(model_run_prev_val)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Correct_RT_logz ~ Run * Valence_prev + (1 + Run * Valence_prev |
## participant_id)
## Data: rt_data_mod
##
## REML criterion at convergence: 24314.4
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0901 -0.6932 -0.1166 0.5836 5.4291
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## participant_id (Intercept) 0.1565248 0.39563
## Run 0.0268273 0.16379 -0.99
## Valence_prevpositive 0.0003587 0.01894 0.99 -0.99
## Run:Valence_prevpositive 0.0028865 0.05373 -0.14 -0.02 -0.11
## Residual 0.9418727 0.97050
## Number of obs: 8676, groups: participant_id, 121
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 9.986e-02 5.142e-02 1.373e+02 1.942 0.0542 .
## Run -5.127e-02 2.026e-02 1.302e+02 -2.531 0.0126 *
## Valence_prevpositive 4.872e-02 5.045e-02 6.825e+03 0.966 0.3343
## Run:Valence_prevpositive -4.437e-03 1.940e-02 3.057e+02 -0.229 0.8192
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) Run Vlnc_p
## Run -0.948
## Vlnc_prvpst -0.475 0.402
## Rn:Vlnc_prv 0.413 -0.460 -0.879
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
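The singular fits flagged above (random-effect correlations at or near ±0.99) suggest the maximal random-effects structure is over-parameterized. A common remedy is to drop the random-effect correlations or the problematic slopes and compare fits; a sketch, assuming the same data and model objects:

```r
# The "||" syntax drops the intercept-slope correlations from the
# random-effects structure
model_run_prev_val_simpler <- lmer(
  Correct_RT_logz ~ Run * Valence_prev + (1 + Run * Valence_prev || participant_id),
  data = rt_data_mod
)

# Compare the simplified model against the maximal one
anova(model_run_prev_val, model_run_prev_val_simpler)
```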
ggplot(data = subset(rt_data_mod, !is.na(Valence_prev)), aes(x=factor(Run), y=Correct_RT_logz, fill=Valence_prev)) +
geom_boxplot(outlier.shape = NA) +
geom_point(alpha=0.05, aes(fill=Valence_prev),
position = position_jitterdodge(dodge.width = 0.8)) +
theme_classic()
## Warning: Removed 136 rows containing non-finite outside the scale range
## (`stat_boxplot()`).
## Warning: Removed 136 rows containing missing values or values outside the scale range
## (`geom_point()`).
The valence of the previous within-condition trial did not significantly affect reaction times.
Since previous-trial valence alone showed no effect, we next test whether its impact depends on the peer condition. For example, the valence of the previous trial might be more salient when it was a similar peer who gave positive feedback, or a dissimilar peer who gave negative feedback.
model_run_prev_val_cond <- lmer(Correct_RT_logz ~ Run*Condition*Valence_prev + (1 + Run*Condition*Valence_prev | participant_id),
data = rt_data_mod)
## boundary (singular) fit: see help('isSingular')
## Warning: Model failed to converge with 2 negative eigenvalues: -5.1e-01
## -5.6e+00
summary(model_run_prev_val_cond)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Correct_RT_logz ~ Run * Condition * Valence_prev + (1 + Run *
## Condition * Valence_prev | participant_id)
## Data: rt_data_mod
##
## REML criterion at convergence: 24306.2
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0824 -0.6826 -0.1139 0.5796 5.1997
##
## Random effects:
## Groups Name Variance Std.Dev.
## participant_id (Intercept) 0.351893 0.59321
## Run 0.046369 0.21534
## ConditionComputer 0.226190 0.47559
## ConditionDisPeer 0.160376 0.40047
## Valence_prevpositive 0.189033 0.43478
## Run:ConditionComputer 0.022199 0.14899
## Run:ConditionDisPeer 0.009739 0.09869
## Run:Valence_prevpositive 0.031398 0.17720
## ConditionComputer:Valence_prevpositive 0.224892 0.47423
## ConditionDisPeer:Valence_prevpositive 0.152987 0.39114
## Run:ConditionComputer:Valence_prevpositive 0.032105 0.17918
## Run:ConditionDisPeer:Valence_prevpositive 0.031590 0.17774
## Residual 0.923712 0.96110
## Corr
##
## -0.94
## -0.74 0.56
## -0.74 0.56 1.00
## -0.71 0.59 0.94 0.94
## 0.72 -0.67 -0.87 -0.88 -0.97
## 0.67 -0.63 -0.89 -0.89 -0.91 0.91
## 0.52 -0.62 -0.62 -0.63 -0.79 0.86 0.86
## 0.64 -0.56 -0.84 -0.84 -0.87 0.90 0.79 0.65
## 0.62 -0.56 -0.81 -0.81 -0.79 0.80 0.90 0.66 0.81
## -0.51 0.60 0.57 0.57 0.76 -0.87 -0.70 -0.88 -0.81 -0.57
## -0.49 0.60 0.45 0.46 0.49 -0.56 -0.75 -0.67 -0.39 -0.81 0.41
##
## Number of obs: 8676, groups: participant_id, 121
##
## Fixed effects:
## Estimate Std. Error df
## (Intercept) 1.474e-01 9.766e-02 1.425e+02
## Run -7.210e-02 3.600e-02 1.542e+02
## ConditionComputer -3.610e-02 1.115e-01 1.525e+02
## ConditionDisPeer -7.704e-02 1.024e-01 2.104e+02
## Valence_prevpositive 1.880e-02 1.038e-01 1.806e+02
## Run:ConditionComputer 3.148e-02 4.041e-02 2.067e+02
## Run:ConditionDisPeer 2.350e-02 3.651e-02 3.819e+02
## Run:Valence_prevpositive -1.329e-03 3.919e-02 1.604e+02
## ConditionComputer:Valence_prevpositive 8.086e-04 1.362e-01 1.858e+02
## ConditionDisPeer:Valence_prevpositive 4.549e-02 1.402e-01 3.601e+02
## Run:ConditionComputer:Valence_prevpositive 1.773e-02 5.063e-02 2.283e+02
## Run:ConditionDisPeer:Valence_prevpositive -2.051e-02 5.285e-02 2.482e+02
## t value Pr(>|t|)
## (Intercept) 1.510 0.133
## Run -2.003 0.047 *
## ConditionComputer -0.324 0.747
## ConditionDisPeer -0.752 0.453
## Valence_prevpositive 0.181 0.856
## Run:ConditionComputer 0.779 0.437
## Run:ConditionDisPeer 0.644 0.520
## Run:Valence_prevpositive -0.034 0.973
## ConditionComputer:Valence_prevpositive 0.006 0.995
## ConditionDisPeer:Valence_prevpositive 0.325 0.746
## Run:ConditionComputer:Valence_prevpositive 0.350 0.727
## Run:ConditionDisPeer:Valence_prevpositive -0.388 0.698
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) Run CndtnC CndtDP Vlnc_p Rn:CnC Rn:CDP Rn:Vl_ CnC:V_
## Run -0.917
## CondtnCmptr -0.763 0.669
## ConditnDsPr -0.802 0.707 0.713
## Vlnc_prvpst -0.799 0.718 0.709 0.746
## Rn:CndtnCmp 0.695 -0.744 -0.902 -0.638 -0.654
## Rn:CndtnDsP 0.709 -0.767 -0.626 -0.902 -0.668 0.683
## Rn:Vlnc_prv 0.698 -0.784 -0.607 -0.643 -0.888 0.692 0.717
## CndtnCmp:V_ 0.607 -0.549 -0.797 -0.565 -0.755 0.742 0.505 0.666
## CndtnDsP:V_ 0.564 -0.515 -0.498 -0.710 -0.705 0.457 0.659 0.631 0.544
## Rn:CndtC:V_ -0.539 0.603 0.700 0.491 0.683 -0.803 -0.541 -0.756 -0.899
## Rn:CndDP:V_ -0.511 0.574 0.427 0.619 0.620 -0.478 -0.708 -0.693 -0.465
## CDP:V_ R:CC:V
## Run
## CondtnCmptr
## ConditnDsPr
## Vlnc_prvpst
## Rn:CndtnCmp
## Rn:CndtnDsP
## Rn:Vlnc_prv
## CndtnCmp:V_
## CndtnDsP:V_
## Rn:CndtC:V_ -0.480
## Rn:CndDP:V_ -0.900 0.508
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
ggplot(data = subset(rt_data_mod, !is.na(Valence_prev)), aes(x=factor(Run), y=Correct_RT_logz, fill=Condition, alpha=Valence_prev)) +
geom_boxplot(outlier.shape = NA) +
theme_classic()
## Warning: Using alpha for a discrete variable is not advised.
## Warning: Removed 136 rows containing non-finite outside the scale range
## (`stat_boxplot()`).
Adding peer condition makes no difference: no effects involving previous valence reach significance.
In the later runs, is there a difference in reaction time between the similar and dissimilar peer conditions? How many participants show this learning effect?
rt_means_subj_cond_run <- ddply(rt_data_mod,c('participant_id','Condition','Run'),
summarise,
mean=mean(Correct_RT, na.rm=TRUE))
Calculate the difference in RT between the similar and dissimilar peer conditions at each participant's last run.
r4_peer_diff <- data.frame(matrix(ncol = 2, nrow = 0))
# Name columns for the empty dataframe
colnames(r4_peer_diff) <- c('ParticipantID', 'sim_dis_RT')
for (subj in unique(rt_means_subj_cond_run$participant_id)) {
temp_subj_data <- rt_means_subj_cond_run[rt_means_subj_cond_run$participant_id == subj, ]
temp_last_run <- max(temp_subj_data$Run)  # each participant's last available run
temp_run_data <- temp_subj_data[temp_subj_data$Run == temp_last_run, ]
temp_diff <- temp_run_data[temp_run_data$Condition == 'SimPeer', 'mean'] - temp_run_data[temp_run_data$Condition == 'DisPeer', 'mean']
r4_peer_diff[nrow(r4_peer_diff) + 1,] = c(subj,temp_diff)
}
r4_peer_diff$sim_dis_RT <- as.numeric(r4_peer_diff$sim_dis_RT)
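The per-participant loop above can also be written more compactly with dplyr/tidyr (both loaded at the top of this document); a sketch, assuming the same column names:

```r
r4_peer_diff_alt <- rt_means_subj_cond_run %>%
  group_by(participant_id) %>%
  filter(Run == max(Run)) %>%   # keep each participant's last run
  ungroup() %>%
  # Spread conditions into columns: SimPeer, DisPeer, Computer
  pivot_wider(names_from = Condition, values_from = mean) %>%
  mutate(sim_dis_RT = SimPeer - DisPeer) %>%
  select(ParticipantID = participant_id, sim_dis_RT)
```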
ggplot(r4_peer_diff, aes(x=sim_dis_RT)) + geom_histogram() + theme_classic()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
prop_learn <- nrow(r4_peer_diff[r4_peer_diff$sim_dis_RT < 0, ]) / nrow(r4_peer_diff)
print(paste(round(prop_learn, 4)*100, '% of participants had faster RTs for similar peers than dissimilar peers for their last run.'))
## [1] "52.07 % of participants had faster RTs for similar peers than dissimilar peers for their last run."
t.test(r4_peer_diff$sim_dis_RT, alternative = "two.sided", var.equal = FALSE)
##
## One Sample t-test
##
## data: r4_peer_diff$sim_dis_RT
## t = -1.035, df = 120, p-value = 0.3027
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## -0.031074516 0.009738864
## sample estimates:
## mean of x
## -0.01066783
At the group level, then, the similar-versus-dissimilar RT difference at the last run did not differ significantly from zero (p = .30). We can run the same check for previous valence; note that previous valence here is the feedback valence of the previous trial within the same condition.
rt_means_subj_pval_run <- ddply(rt_data_mod,c('participant_id','Valence_prev','Run'),
summarise,
mean=mean(Correct_RT, na.rm=TRUE))
# Remove rows with NA
rt_means_subj_pval_run <- rt_means_subj_pval_run[complete.cases(rt_means_subj_pval_run), ]
Calculate the difference in RT between trials preceded by positive versus negative feedback at each participant's last run.
# Create a new column for the previous-valence RT difference
r4_peer_diff$pos_neg_RT <- NA
for (subj in unique(rt_means_subj_pval_run$participant_id)) {
temp_subj_data <- rt_means_subj_pval_run[rt_means_subj_pval_run$participant_id == subj, ]
temp_last_run <- max(temp_subj_data$Run)  # each participant's last available run
temp_run_data <- temp_subj_data[temp_subj_data$Run == temp_last_run, ]
temp_diff <- temp_run_data[temp_run_data$Valence_prev == 'positive', 'mean'] - temp_run_data[temp_run_data$Valence_prev == 'negative', 'mean']
r4_peer_diff[r4_peer_diff$ParticipantID == subj, 'pos_neg_RT'] = temp_diff
}
r4_peer_diff$pos_neg_RT <- as.numeric(r4_peer_diff$pos_neg_RT)
ggplot(r4_peer_diff, aes(x=pos_neg_RT)) + geom_histogram() + theme_classic()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
prop_learn <- nrow(r4_peer_diff[r4_peer_diff$pos_neg_RT < 0, ]) / nrow(r4_peer_diff)
print(paste(round(prop_learn, 4)*100, '% of participants had faster RTs for trials that were preceded by a positive outcome than a negative outcome, for their last run.'))
## [1] "41.32 % of participants had faster RTs for trials that were preceded by a positive outcome than a negative outcome, for their last run."
t.test(r4_peer_diff$pos_neg_RT, alternative = "two.sided")
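Since only 41% of participants showed the effect, we could also ask whether that proportion differs from chance with a sign-test-style binomial test; a sketch, assuming r4_peer_diff as built above:

```r
# Count participants who were faster after positive feedback (negative diff)
n_faster <- sum(r4_peer_diff$pos_neg_RT < 0, na.rm = TRUE)
n_total  <- sum(!is.na(r4_peer_diff$pos_neg_RT))

# Exact binomial test against a 50% chance rate
binom.test(n_faster, n_total, p = 0.5)
```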
Next, we will analyze the run-by-condition interaction separately for autistic and non-autistic adolescents.
rt_data_mod_asd <- filter(rt_data_mod, group == 'autistic')
print(paste("There are ",length(unique(rt_data_mod_asd$participant_id)),
" autistic participants"))
## [1] "There are 33 autistic participants"
model_run_peer_asd <- lmer(Correct_RT_logz ~ Run*Condition + (1 + Run | participant_id), data = rt_data_mod_asd)
## boundary (singular) fit: see help('isSingular')
summary(model_run_peer_asd)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Correct_RT_logz ~ Run * Condition + (1 + Run | participant_id)
## Data: rt_data_mod_asd
##
## REML criterion at convergence: 7881
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.6229 -0.7230 -0.1220 0.5973 4.1695
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## participant_id (Intercept) 0.1210 0.3479
## Run 0.0201 0.1418 -1.00
## Residual 0.9612 0.9804
## Number of obs: 2794, groups: participant_id, 33
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 0.13760 0.09720 94.40686 1.416 0.160
## Run -0.06461 0.03756 83.72797 -1.720 0.089 .
## ConditionComputer -0.01016 0.10793 2758.04356 -0.094 0.925
## ConditionDisPeer -0.07354 0.10780 2758.03994 -0.682 0.495
## Run:ConditionComputer 0.01773 0.04001 2758.04464 0.443 0.658
## Run:ConditionDisPeer 0.03979 0.04000 2758.12696 0.995 0.320
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) Run CndtnC CndtDP Rn:CnC
## Run -0.944
## CondtnCmptr -0.547 0.477
## ConditnDsPr -0.548 0.478 0.494
## Rn:CndtnCmp 0.497 -0.527 -0.907 -0.448
## Rn:CndtnDsP 0.497 -0.528 -0.448 -0.907 0.495
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
In autistic adolescents, there are no significant main effects or interactions.
m.lst <- emtrends(model_run_peer_asd, "Condition", var="Run")
pairs(m.lst)
## contrast estimate SE df t.ratio p.value
## SimPeer - Computer -0.0177 0.0400 2729 -0.443 0.8975
## SimPeer - DisPeer -0.0398 0.0400 2730 -0.995 0.5802
## Computer - DisPeer -0.0221 0.0402 2729 -0.549 0.8471
##
## Degrees-of-freedom method: kenward-roger
## P value adjustment: tukey method for comparing a family of 3 estimates
ggplot(data = rt_data_mod_asd, aes(x=factor(Run), y=Correct_RT_logz,
fill=Condition)) +
geom_boxplot(outlier.shape = NA) +
geom_point(alpha=0.05, aes(fill=Condition),
position = position_jitterdodge(dodge.width = 0.8)) +
theme_classic()
## Warning: Removed 45 rows containing non-finite outside the scale range
## (`stat_boxplot()`).
## Warning: Removed 45 rows containing missing values or values outside the scale range
## (`geom_point()`).
rt_data_mod_asd_means <- aggregate(Correct_RT_logz ~ participant_id * Run * Condition, rt_data_mod_asd, mean)
rt_data_mod_asd_means$Condition <- factor(rt_data_mod_asd_means$Condition,
levels=c('SimPeer', 'DisPeer', 'Computer'))
ggplot(data=rt_data_mod_asd_means, aes(x=factor(Run), y=Correct_RT_logz,
group=Condition, color=Condition)) +
geom_smooth(method='lm') +
geom_jitter(alpha=0.2) +
scale_color_manual(values=c('#1f77b4', '#ff7f0e', '#2ca02c'),
labels=c('Similar Peer', 'Dissimilar Peer', 'Computer')) +
theme_classic()
## `geom_smooth()` using formula = 'y ~ x'
In this analysis, we will examine specific contrasts between autistic and non-autistic adolescents; for example, whether the two groups differ in how reaction times to similar peers versus the computer condition change throughout the task.
model_run_peer_group <- lmer(Correct_RT_logz ~ Run*Condition*group + (1 + Run | participant_id), data = rt_data_mod)
## boundary (singular) fit: see help('isSingular')
summary(model_run_peer_group)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Correct_RT_logz ~ Run * Condition * group + (1 + Run | participant_id)
## Data: rt_data_mod
##
## REML criterion at convergence: 27742.8
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0188 -0.6911 -0.1137 0.5887 5.5492
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## participant_id (Intercept) 0.15480 0.3935
## Run 0.02521 0.1588 -1.00
## Residual 0.94332 0.9712
## Number of obs: 9906, groups: participant_id, 121
##
## Fixed effects:
## Estimate Std. Error df
## (Intercept) 1.375e-01 1.018e-01 2.928e+02
## Run -6.459e-02 3.938e-02 2.657e+02
## ConditionComputer -1.035e-02 1.069e-01 9.787e+03
## ConditionDisPeer -7.363e-02 1.068e-01 9.787e+03
## groupnon-autistic 1.693e-01 1.207e-01 2.990e+02
## Run:ConditionComputer 1.782e-02 3.964e-02 9.787e+03
## Run:ConditionDisPeer 3.986e-02 3.962e-02 9.787e+03
## Run:groupnon-autistic -6.393e-02 4.661e-02 2.704e+02
## ConditionComputer:groupnon-autistic -8.702e-02 1.269e-01 9.787e+03
## ConditionDisPeer:groupnon-autistic -4.859e-03 1.268e-01 9.787e+03
## Run:ConditionComputer:groupnon-autistic 4.871e-02 4.699e-02 9.787e+03
## Run:ConditionDisPeer:groupnon-autistic -2.188e-02 4.699e-02 9.787e+03
## t value Pr(>|t|)
## (Intercept) 1.351 0.178
## Run -1.640 0.102
## ConditionComputer -0.097 0.923
## ConditionDisPeer -0.689 0.491
## groupnon-autistic 1.403 0.162
## Run:ConditionComputer 0.449 0.653
## Run:ConditionDisPeer 1.006 0.314
## Run:groupnon-autistic -1.372 0.171
## ConditionComputer:groupnon-autistic -0.686 0.493
## ConditionDisPeer:groupnon-autistic -0.038 0.969
## Run:ConditionComputer:groupnon-autistic 1.037 0.300
## Run:ConditionDisPeer:groupnon-autistic -0.466 0.642
##
## Correlation of Fixed Effects:
## (Intr) Run CndtnC CndtDP grpnn- Rn:CnC Rn:CDP Rn:gr- CndC:-
## Run -0.950
## CondtnCmptr -0.518 0.451
## ConditnDsPr -0.519 0.451 0.494
## gropnn-tstc -0.844 0.801 0.437 0.438
## Rn:CndtnCmp 0.470 -0.498 -0.907 -0.448 -0.397
## Rn:CndtnDsP 0.470 -0.498 -0.448 -0.907 -0.397 0.495
## Rn:grpnn-ts 0.802 -0.845 -0.381 -0.381 -0.950 0.421 0.421
## CndtnCmpt:- 0.436 -0.380 -0.843 -0.416 -0.520 0.765 0.378 0.453
## CndtnDsPr:- 0.437 -0.380 -0.416 -0.842 -0.521 0.377 0.764 0.454 0.495
## Rn:CndtnC:- -0.397 0.420 0.766 0.378 0.473 -0.844 -0.418 -0.500 -0.908
## Rn:CndtDP:- -0.397 0.420 0.378 0.765 0.473 -0.417 -0.843 -0.500 -0.450
## CnDP:- R:CC:-
## Run
## CondtnCmptr
## ConditnDsPr
## gropnn-tstc
## Rn:CndtnCmp
## Rn:CndtnDsP
## Rn:grpnn-ts
## CndtnCmpt:-
## CndtnDsPr:-
## Rn:CndtnC:- -0.450
## Rn:CndtDP:- -0.908 0.496
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
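To test the specific contrast described above — whether the run slope for similar peers relative to the computer condition differs between groups — the fitted model can be passed to emtrends with an interaction contrast; a sketch, assuming model_run_peer_group from above:

```r
# Run slope for each Condition x group cell
group_trends <- emtrends(model_run_peer_group, ~ Condition * group, var = "Run")
group_trends

# Interaction contrasts: does each pairwise slope difference between
# conditions (e.g. SimPeer vs. Computer) itself differ between groups?
contrast(group_trends, interaction = c("pairwise", "pairwise"))
```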